To meet the need for efficient registration and fusion of unregistered infrared and visible images, we propose RRFNet-DCDAN, a unified infrared-visible image registration and fusion network incorporating a dynamic convolutional dual attention mechanism. The model adaptively fuses unregistered multimodal images of different resolutions. RRFNet-DCDAN combines a dual attention network (DAN) with a dynamic convolution attention (DCA) module: the DAN emphasizes the salient features of infrared targets while preserving the texture details of visible images, and the DCA efficiently handles input image pairs of different resolutions. On the MSRS and RoadScene datasets, the proposed method outperforms benchmark models such as GTF and DenseFuse on five evaluation metrics, including mutual information and visual fidelity, while reducing image processing time by 20.57%. The proposed method thus offers a promising approach to processing images of different resolutions.
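The dynamic-convolution idea behind a DCA-style module can be sketched as an attention-weighted mixture of candidate kernels, in the spirit of CondConv-style dynamic convolution. The pooling-based attention head, kernel count, and shapes below are illustrative assumptions, not the paper's exact DCA design:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def conv2d_valid(img, k):
    # Plain 'valid' 2-D cross-correlation with a single kernel
    kh, kw = k.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def dynamic_conv(img, kernels, att_weights, att_bias):
    """Dynamic convolution: mix K candidate kernels with input-dependent
    attention weights, then convolve once with the mixed kernel.
    kernels: (K, kh, kw); att_weights, att_bias: (K,) toy attention head."""
    pooled = img.mean()                           # global average pooling
    logits = att_weights * pooled + att_bias      # tiny linear attention head
    alpha = softmax(logits)                       # per-kernel attention weights
    mixed = np.tensordot(alpha, kernels, axes=1)  # weighted kernel mixture
    return conv2d_valid(img, mixed)
```

Because the attention weights depend on the input (here only through a global mean, for brevity), the effective kernel adapts per image — which is what lets a single layer serve inputs with different characteristics.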
Super-resolution microscopy has broken through the diffraction limit of conventional optical microscopy, providing a crucial tool for high-resolution optical imaging at the sub-cellular scale and significantly advancing research in frontier disciplines such as cell biology, neuroscience, and pathology. Among super-resolution techniques, structured illumination microscopy (SIM) has emerged as a vital method for studying the dynamics of fine structures in live cells, owing to its rapid imaging capability, low phototoxicity, and excellent compatibility with fluorescent probes. With the rapid development of artificial intelligence, deep learning has injected new momentum into SIM: deep learning-enhanced SIM has achieved groundbreaking progress in reducing phototoxicity, increasing imaging speed, improving resolution, and expanding functional applications, greatly broadening the technique's scope. This review systematically summarizes recent advances in deep learning-driven SIM and provides a perspective on future developments in the field.
With the widespread deployment of high-performance sensing systems in applications such as remote sensing monitoring, intelligent surveillance, and urban management, increasingly stringent requirements have been placed on imaging systems in terms of spatial coverage and image detail resolution. However, traditional optical imaging systems are fundamentally limited by the optical system's space-bandwidth product, which defines the trade-off between field of view and resolution. To address this limitation, wide-area high-resolution imaging systems have become a major focus of modern optical imaging research. This paper examines the inherent trade-off between field of view and resolution in conventional systems and presents a systematic review of four representative imaging architectures: single-device scanning systems, multi-chip mosaic systems, multi-camera array systems, and multi-scale imaging systems. Each architecture is examined in terms of imaging principles, system configuration, technical challenges, and application suitability, along with a comparative evaluation of its respective strengths and limitations. Furthermore, by grounding the discussion in the theories of the space-bandwidth product and lens scaling laws, the paper reveals the physical constraints of traditional systems and explores the future potential of multi-camera architectures in areas such as multidimensional imaging, high-speed video, large dynamic range, and multimodal sensing. These insights provide theoretical guidance and strategic direction for the development of next-generation intelligent imaging systems.
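The space-bandwidth product can be read as a fixed budget of resolvable spots: roughly (FOV / resolution) per axis, squared for a 2-D field. A back-of-envelope sketch of the field-of-view/resolution trade-off (the resolvable-spot count is the standard estimate; the specific numbers are illustrative, not drawn from any system in this review):

```python
def sbp(fov_mm: float, resolution_um: float) -> float:
    """Approximate space-bandwidth product: number of resolvable spots
    over a square field of view of side fov_mm at resolution resolution_um."""
    spots_per_axis = (fov_mm * 1000.0) / resolution_um  # mm -> um
    return spots_per_axis ** 2

# The trade-off in numbers: a 10 mm field at 10 um resolution and a
# 1 mm field at 1 um resolution consume the same ~1e6-spot SBP budget,
# so a conventional lens cannot widen one without sacrificing the other.
print(sbp(10, 10), sbp(1, 1))
```

Architectures such as multi-camera arrays sidestep this budget by tiling many optical systems, each contributing its own space-bandwidth product.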
Traditional optoelectronic imaging technology, based on the principle of direct visual representation, fundamentally involves uniform spatial sampling and reproduction of two-dimensional light-field intensity signals. However, constrained by the physical and technological limitations of optical imaging systems and optoelectronic detectors, it is increasingly insufficient to meet the growing demands for high-resolution, high-sensitivity, and multidimensional high-speed imaging in many fields. Computational imaging tightly integrates optical control in the physical domain with information processing in the digital domain, offering innovative solutions that overcome the limitations of traditional imaging technologies and marking the future direction of advanced optical imaging. From the perspective of the imaging chain, the optical control methods of computational optical imaging can be categorized into illumination control, optical system control, object control, and detector control. We focus on detector control: introducing encoding devices (such as displacers, masks, and spatial light modulators) at the focal plane of the image sensor, at the end of the imaging chain, to modulate the high-dimensional light field. Coupled with post-processing reconstruction algorithms, this approach enables the decoupling of intensity, phase, 3D structure, light-field information, and spectrum, paving the way for high-performance optoelectronic imaging and detection. These methods have the potential to overcome the inherent bottlenecks of traditional optoelectronic imaging, such as limited imaging dimensions and single-mode imaging, providing new avenues for achieving high-resolution, multidimensional, hyperspectral, miniaturized, and ultrafast imaging.
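The encode-then-reconstruct pipeline described above can be reduced to a toy linear model: coding masks at the detector mix an unknown scene into measurements, and a least-squares solve decodes it. The 1-D scene, binary masks, and linear forward model are illustrative assumptions; real systems encode high-dimensional light fields and rely on far more sophisticated reconstruction algorithms:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 16, 32                                 # scene size, number of coded measurements
x = rng.random(n)                             # unknown toy scene (1-D intensity profile)
H = rng.integers(0, 2, size=(m, n)).astype(float)  # binary focal-plane coding masks
y = H @ x                                     # detector records mask-encoded intensities
x_hat, *_ = np.linalg.lstsq(H, y, rcond=None)      # post-processing reconstruction
```

With more measurements than unknowns and random masks, the linear system is overdetermined and the scene is recovered to numerical precision; the same encode/decode structure underlies decoupling of phase, depth, or spectral channels when the forward operator models those dimensions.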